

General Motors' 'Eyes-Off' System Begs the Question: What Happens When Cars Go AI?

WIRED

General Motors' new self-driving system will let the driver speed down the highway without looking at the road. It's one of several features enabled by the adoption of machine intelligence in cars. A new self-driving system coming to Cadillac Escalades will handle the driving on approved highways, enabling the driver to do basically anything they want behind the wheel. General Motors is launching another salvo in the self-driving wars. In 2028, the automaker announced today, it will roll out what it's calling an "eyes-off" driving system on the electric Cadillac Escalade IQ.


Moving Forward: A Review of Autonomous Driving Software and Hardware Systems

Wang, Xu, Maleki, Mohammad Ali, Azhar, Muhammad Waqar, Trancoso, Pedro

arXiv.org Artificial Intelligence

With their potential to significantly reduce traffic accidents, enhance road safety, optimize traffic flow, and decrease congestion, autonomous driving systems have been a major focus of research and development in recent years. Beyond these immediate benefits, they offer long-term advantages in promoting sustainable transportation by reducing emissions and fuel consumption. Achieving a high level of autonomy across diverse conditions requires a comprehensive understanding of the environment. This is accomplished by processing data from sensors such as cameras, radars, and LiDARs through a software stack that relies heavily on machine learning algorithms. These ML models demand significant computational resources and involve large-scale data movement, presenting challenges for hardware to execute them efficiently and at high speed. In this survey, we first outline and highlight the key components of self-driving systems, covering input sensors, commonly used datasets, simulation platforms, and the software architecture. We then explore the underlying hardware platforms that support the execution of these software systems. By presenting a comprehensive view of autonomous driving systems and their increasing demands, particularly for higher levels of autonomy, we analyze the performance and efficiency of scaled-up off-the-shelf GPU/CPU-based systems, emphasizing the challenges within the computational components. Through examples showcasing the diverse computational and memory requirements in the software stack, we demonstrate how more specialized hardware and processing closer to memory can enable more efficient execution with lower latency. Finally, based on current trends and future demands, we conclude by speculating on what a future hardware platform for autonomous driving might look like.


US probing Elon Musk's Tesla over self-driving systems

BBC News

NHTSA's preliminary evaluation follows four crash reports involving the use of Tesla's "Full Self-Driving" (FSD) software. The agency said the crashes involved reduced roadway visibility, such as fog or glare from the sun. One of the incidents involved a Tesla fatally striking a pedestrian, and another involved someone being injured, NHTSA said. The evaluation aims to determine whether Tesla's self-driving systems can detect and appropriately respond to reduced-visibility conditions. It will also examine whether other self-driving crashes have happened under similar conditions.


Smile, you're on camera! Self-driving cars are here and they're watching you

The Guardian

If you've spent any time in San Francisco, you might believe we're on the cusp of the self-driving future promised by car makers and the tech industry: a high-tech utopia where roving robot cars pick up and drop off passengers seamlessly and more safely than if they had a human behind the wheel. While the city certainly has one key element down – a small network of driverless cars – the reality is far different and much more awkward and invasive than what the people building the technology once portrayed. What companies pitched were ultra-smart, AI-driven vehicles that make people inside and outside of the cars safer. But in addition to reports that the cars are becoming a frequent impediment to public safety, the always-on, always-recording cameras also pose a risk to personal safety, experts say. A new report from Bloomberg reveals that one of the companies behind the self-driving cars operating in San Francisco, Google-owned Waymo, has been subject to law enforcement requests for footage that it captured while driving around.


Argo AI Shuts Down: Is 'Self-Driving Cars' a Realistic Dream?

#artificialintelligence

Ford- and VW-backed Argo AI, the autonomous vehicle technology company, is shutting down. The company, once hailed as a leader in autonomous driving systems, failed to attract other investors and will now have part of its resources absorbed by its two primary investors. In 2017, Ford invested $1 billion in Argo AI, expecting autonomous vehicle technology to be available for commercial use by 2021. But with heavy cash burn and the technology still a "long way off", the company's priorities seem to have changed. Ford Motor Company also said that there is "potential for attracting revenue streams tied to L3" and that it will redirect its resources toward deploying the L3 BlueCruise system. Another report pointed out that Ford's investment in Argo AI came at a time of huge hype around autonomous vehicle technology, which was still at a nascent stage.


Tesla to REMOVE sensors from new cars in a bet on cameras and AI - amid scrutiny of crashes

Daily Mail - Science & tech

Tesla is removing sensors from its cars as it shifts toward a system based solely on eight cameras that feed information into its self-driving artificial intelligence. Ultrasonic sensors (USS), which emit high-frequency sounds that bounce off potential obstacles, will in the coming months be phased out of new Model 3 and Model Y vehicles sold in North America, Europe, the Middle East and Taiwan, and then globally. They will be phased out of Model S and Model X cars next year. The announcement from the company led by CEO Elon Musk comes as Tesla faces intense regulatory and legal scrutiny over a series of crashes involving its self-driving system. Data from the National Highway Traffic Safety Administration (NHTSA) identified 392 reported accidents as of May 2022 involving cars with driver-assistance features; of those, 273 involved Teslas.


Tesla unveils new Dojo supercomputer so powerful it tripped the power grid

#artificialintelligence

Tesla has unveiled the latest version of its Dojo supercomputer, and it's apparently so powerful that it tripped the power grid in Palo Alto. Dojo is Tesla's own custom supercomputer platform, built from the ground up for AI machine learning and, more specifically, for training on the video data coming from its fleet of vehicles. The automaker already has a large NVIDIA GPU-based supercomputer that is one of the most powerful in the world, but the new Dojo computer uses chips and an entire infrastructure designed by Tesla. The custom-built supercomputer is expected to elevate Tesla's capacity to train neural nets on video data, which is critical to the computer vision technology powering its self-driving effort. Last year, at Tesla's AI Day, the company unveiled its Dojo supercomputer, but it was still ramping up the effort at the time.


Council Post: Advancing Artificial Intelligence And Creating The Technology Of The Future

#artificialintelligence

The global artificial intelligence (AI) market is expected to reach the trillion-dollar mark by 2030, and just as it has done with the global automotive industry, Tesla looks set to absorb a considerable amount of market share. This is all thanks to Dojo, the supercomputer set to drive the most sophisticated (and fastest) AI training machine to date. What is Project Dojo, and why does it matter? Necessity breeds innovation: Tesla's million-plus fleet of vehicles generates huge amounts of data, and the self-driving systems behind it require vast amounts of real-world data for training. The computational demands of training these neural nets are huge, and since Tesla didn't want to be limited by the general-purpose graphics processing units (GPUs) available, it decided to build something better.


A Certifiable Security Patch for Object Tracking in Self-Driving Systems via Historical Deviation Modeling

Pan, Xudong, Xiao, Qifan, Zhang, Mi, Yang, Min

arXiv.org Machine Learning

Self-driving cars (SDCs) commonly implement a perception pipeline that detects surrounding obstacles and tracks their moving trajectories, laying the groundwork for the subsequent driving decision-making process. Although the security of obstacle detection in SDCs has been intensively studied, only very recently have attackers begun to exploit vulnerabilities in the tracking module. Compared with attacking the object detectors alone, this new attack strategy influences driving decisions more effectively with a smaller attack budget. However, little is known about whether the revealed vulnerability remains effective in end-to-end self-driving systems and, if so, how to mitigate the threat. In this paper, we present the first systematic study of the security of object tracking in SDCs. Through a comprehensive case study on the full perception pipeline of a popular open-source self-driving system, Baidu's Apollo, we prove that the mainstream multi-object tracker (MOT) based on the Kalman Filter (KF) is unsafe even with the multi-sensor fusion mechanism enabled. Our root-cause analysis reveals that the vulnerability is innate to the design of KF-based MOT: the tracker is expected to error-handle the predictions of the object detectors, yet the adopted KF algorithm tends to trust the observation more when its deviation from the prediction is larger. To address this design flaw, we propose a simple yet effective security patch for KF-based MOT. Its core is an adaptive strategy that balances the KF's focus between observations and predictions according to an anomaly index of the observation-prediction deviation, and it has certified effectiveness against a generalized hijacking attack model. Extensive evaluation on four existing KF-based MOT implementations (including 2D and 3D, academic and Apollo ones) validates the defense's effectiveness and the trivial performance overhead of our approach.
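The adaptive-gain idea in the abstract can be sketched with a toy one-dimensional Kalman filter. This is a simplified illustration, not the authors' implementation: the noise values, the anomaly threshold, and the linear damping rule are all assumptions chosen for clarity, while a real MOT tracks multi-dimensional bounding-box states.

```python
# Toy 1D Kalman filter step with an anomaly-damped gain.
# Vanilla KF moves the state MORE when the observation deviates more from the
# prediction; the patch instead damps the gain when the deviation is anomalous.

def kf_step(x, P, z, Q=0.01, R=0.1, anomaly_threshold=3.0):
    # Predict (constant-state motion model for simplicity)
    x_pred = x
    P_pred = P + Q
    # Innovation (observation-prediction deviation) and its variance
    innovation = z - x_pred
    S = P_pred + R
    # Anomaly index: deviation normalized by its expected spread
    anomaly = abs(innovation) / S ** 0.5
    # Standard Kalman gain
    K = P_pred / S
    # Adaptive strategy (assumed form): shrink the gain once the deviation
    # looks anomalous, so a hijacking observation cannot drag the track away
    if anomaly > anomaly_threshold:
        K *= anomaly_threshold / anomaly
    x_new = x_pred + K * innovation
    P_new = (1 - K) * P_pred
    return x_new, P_new

# A benign observation near the track updates the state normally,
# while an implausibly distant (possibly hijacked) one barely moves it.
x0, P0 = 0.0, 1.0
x_benign, _ = kf_step(x0, P0, 0.2)
x_attack, _ = kf_step(x0, P0, 50.0)
```

The key design point is that the damping kicks in only beyond the threshold, so ordinary tracking accuracy is untouched while large hijacking deviations are bounded.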


Mercedes to accept legal responsibility for accidents involving self-driving cars

#artificialintelligence

Mercedes has announced that it will take legal responsibility for any crashes that occur while its self-driving systems are engaged. The company is currently deploying its "Drive Pilot" technology for its new S-Class and EQS saloon models, which is "Level 3" autonomy on the six-tier scale devised by the Society of Automotive Engineers, ranging from Level 0 (no automated driver assistance) to Level 5 (the car drives itself everywhere without any input from the vehicle occupants). Level 3 autonomy means that drivers may take their hands off the wheel and undertake other tasks, such as reading a book, while the car assumes full control of all driving functions. However, this applies only in specific conditions, such as low-speed traffic on motorways, and the person in the driver's seat must be able to retake control within a few seconds of an alert from the car. This is a big leap from Level 2 autonomy, which requires hands-on-wheel supervision from the driver at all times and is currently commonplace on new cars in the form of adaptive cruise control and automated lane-keeping. Some cars from the likes of Audi, Mercedes, BMW, Genesis and Tesla have systems so advanced that they are considered somewhere between Levels 2 and 3, dubbed by experts "Level 2+".
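The six-tier scale described above (SAE J3016) can be summarized as a simple lookup. The one-line summaries for Levels 1 and 4 are not from the article and are paraphrased from the general SAE taxonomy; the supervision rule reflects the article's distinction between hands-on Level 2 and eyes-off Level 3.

```python
# Sketch of the SAE J3016 driving-automation levels as a lookup table.
SAE_LEVELS = {
    0: "No automation: the driver performs all driving tasks",
    1: "Driver assistance: steering OR speed support, one at a time",
    2: "Partial automation: steering AND speed, hands-on supervision at all times",
    3: "Conditional automation: driver may do other tasks, but must retake "
       "control within seconds of an alert, in limited conditions only",
    4: "High automation: no driver fallback needed within a defined domain",
    5: "Full automation: the car drives itself everywhere with no occupant input",
}

def requires_constant_supervision(level):
    # Levels 0-2 demand constant driver supervision; from Level 3 up,
    # the system carries the driving task under its stated conditions.
    return level <= 2
```

This also makes the "Level 2+" label easy to read: such systems still return `True` here, because despite their capability they remain legally supervised driving aids.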